https://arxiv.org/abs/2305.18290 #llm #ai

Spent today taking a deep dive into DPO, and was once again reminded of how much a solid grounding in math matters for AI/ML research...

The original RLHF recipe uses pairwise human preference data (which of A and B is better) to fit a reward model by minimizing negative log likelihood, then trains the main policy model with RL to maximize the learned reward under a regularization term (in PPO-based RLHF, a KL divergence penalty that keeps the policy close to the reference policy). The downsides: RL is notoriously hard to get working, and it also requires a critic model to estimate expected reward, so the overall system is quite complex.
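
In the paper's notation (x is the prompt, y_w and y_l are the preferred and dispreferred responses, σ the sigmoid, β the regularization strength), the two stages look like this:

```latex
% Stage 1: fit the reward model with a Bradley-Terry NLL over preference pairs
\mathcal{L}_R(r_\phi) =
  -\,\mathbb{E}_{(x,\,y_w,\,y_l) \sim \mathcal{D}}
  \big[ \log \sigma\big( r_\phi(x, y_w) - r_\phi(x, y_l) \big) \big]

% Stage 2: RL step, maximize the learned reward with a KL penalty
% toward the reference (SFT) policy
\max_{\pi_\theta}\;
  \mathbb{E}_{x \sim \mathcal{D},\; y \sim \pi_\theta(\cdot \mid x)}
  \big[ r_\phi(x, y) \big]
  \;-\; \beta\, \mathbb{D}_{\mathrm{KL}}\!\big[ \pi_\theta(\cdot \mid x) \,\Vert\, \pi_{\mathrm{ref}}(\cdot \mid x) \big]
```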

DPO's observation is that the RLHF objective is ultimately minimizing a loss over a (latent) reward function. Through a reparameterization and some algebra, it re-derives an equivalent objective that minimizes a loss directly over the policy, bypassing the intermediate reward model entirely: the gradient update directly raises the policy model's probability of generating the winner response and lowers it for the loser response, greatly simplifying the pipeline.
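
The derivation in brief: the KL-regularized objective above has a closed-form optimizer, so the reward can be solved for in terms of the policy; substituting that back into the Bradley-Terry loss makes the intractable partition function Z(x) cancel, leaving a loss over the policy alone:

```latex
% Closed-form optimum of the KL-regularized objective
\pi^{*}(y \mid x) =
  \frac{1}{Z(x)}\, \pi_{\mathrm{ref}}(y \mid x)
  \exp\!\Big( \tfrac{1}{\beta}\, r(x, y) \Big)

% Rearranged: the reward expressed in terms of the policy
r(x, y) = \beta \log \frac{\pi^{*}(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)}
          + \beta \log Z(x)

% Substituted into the Bradley-Terry loss; Z(x) cancels in the difference
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta) =
  -\,\mathbb{E}_{(x,\,y_w,\,y_l) \sim \mathcal{D}}
  \Big[ \log \sigma\!\Big(
    \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)}
    - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}
  \Big) \Big]
```

(A minimal code sketch of this loss follows the reading list below.)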

Further reading:
- KTO: goes one step further; no pairwise comparisons needed, preferences can be learned from a simple upvote/downvote on individual examples.
- IPO: addresses DPO's tendency to overfit.
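
And the promised sketch: a minimal PyTorch-style implementation of the DPO loss. The function name, signature, and toy inputs here are my own; in practice each log-prob tensor would come from summing the token log probabilities of a response under the trainable policy or the frozen reference model.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """DPO loss from per-sequence log probabilities.

    Each argument is a (batch,) tensor holding the summed log prob of
    the chosen / rejected response under the trainable policy or the
    frozen reference model.
    """
    # Implicit reward for each response: beta * log(pi_theta / pi_ref)
    chosen_logratios = policy_chosen_logps - ref_chosen_logps
    rejected_logratios = policy_rejected_logps - ref_rejected_logps

    # Bradley-Terry preference loss on the implicit reward difference
    logits = beta * (chosen_logratios - rejected_logratios)
    return -F.logsigmoid(logits).mean()

# Toy usage with random (negative) log probs: a batch of 4 preference pairs
if __name__ == "__main__":
    b = 4
    loss = dpo_loss(-torch.rand(b), -torch.rand(b),
                    -torch.rand(b), -torch.rand(b))
    print(f"DPO loss: {loss.item():.4f}")
```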



